
    IMP: Indirect Memory Prefetcher

    Machine learning, graph analytics and sparse linear algebra-based applications are dominated by irregular memory accesses resulting from following edges in a graph or non-zero elements in a sparse matrix. These accesses have little temporal or spatial locality, and thus incur long memory stalls and large bandwidth requirements. A traditional streaming or striding prefetcher cannot capture these irregular access patterns. A majority of these irregular accesses come from indirect patterns of the form A[B[i]]. We propose an efficient hardware indirect memory prefetcher (IMP) to capture this access pattern and hide latency. We also propose a partial cacheline accessing mechanism for these prefetches to reduce the network and DRAM bandwidth pressure from the lack of spatial locality. Evaluated on 7 applications, IMP shows 56% speedup on average (up to 2.3×) compared to a baseline 64-core system with streaming prefetchers. This is within 23% of an idealized system. With partial cacheline accessing, we see another 9.4% speedup on average (up to 46.6%). Intel Science and Technology Center for Big Data.
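    To make the A[B[i]] pattern concrete, here is a minimal Python sketch, not the paper's hardware design, of how a prefetcher that observes the streaming walk over the index array B could compute the addresses of future A elements. The prefetch distance, base address and element size are illustrative assumptions.

```python
# Software model of indirect prefetching for accesses of the form A[B[i]].
# All names (prefetch_distance, A_base, elem_size) are illustrative, not from the paper.

def indirect_prefetch_addresses(B, A_base, elem_size, i, prefetch_distance=4):
    """Given the current position i in the index array B, return the addresses
    of the A[] elements an indirect prefetcher would fetch ahead of the core."""
    addrs = []
    for d in range(1, prefetch_distance + 1):
        j = i + d
        if j < len(B):
            # Read the future index B[j] and compute A_base + B[j]*elem_size,
            # hiding the A[B[j]] miss behind the streaming walk over B.
            addrs.append(A_base + B[j] * elem_size)
    return addrs

# Example: a sparse gather y[i] += A[B[i]] with no spatial locality in A.
B = [7, 1023, 64, 9000, 12, 512]
print([hex(a) for a in indirect_prefetch_addresses(B, A_base=0x10000, elem_size=8, i=0)])
```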

    Scheduling task dependence graphs with variable task execution times onto heterogeneous multiprocessors

    We present a statistical optimization approach for scheduling a task dependence graph with variable task execution times onto a heterogeneous multiprocessor system. Scheduling methods in the presence of variations typically rely on worst-case timing estimates for hard real-time applications, or average-case analysis for other applications. However, a large class of soft real-time applications require only statistical guarantees on latency and throughput. We present a general statistical model that captures the probability distributions of task execution times as well as the correlations of execution times of different tasks. We use a Monte Carlo based technique to perform makespan analysis of different schedules based on this model. This approach can be used to analyze the variability present in a variety of soft real-time applications, including an H.264 video processing application. We present two scheduling algorithms based on statistical makespan analysis. The first is a heuristic based on a critical path analysis of the task dependence graph. The other is a simulated annealing algorithm using incremental timing analysis. Both algorithms take as input the required statistical guarantee, and can thus be easily re-used for different required guarantees. We show that optimization methods based on statistical analysis yield a 25-30% improvement in makespan over methods based on static worst-case analysis.
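    As a rough illustration of Monte Carlo makespan analysis, the following Python sketch samples task execution times over a toy task dependence graph and estimates a percentile of the makespan. The DAG, the normal execution-time distributions and the 95% target are invented for illustration, and correlations between task times are ignored.

```python
# Monte Carlo makespan analysis for a task dependence graph with variable execution
# times: a minimal sketch, not the paper's algorithm.
import random

# task -> (mean execution time, std dev, list of predecessor tasks)
dag = {
    "A": (10.0, 2.0, []),
    "B": (20.0, 5.0, ["A"]),
    "C": (15.0, 3.0, ["A"]),
    "D": (5.0,  1.0, ["B", "C"]),   # joins B and C
}

def sample_makespan(dag):
    """Sample one execution time per task and return the critical-path length."""
    finish = {}
    for task in dag:                          # dict order is a valid topological order here
        mean, std, preds = dag[task]
        t = max(random.gauss(mean, std), 0.0)
        finish[task] = max((finish[p] for p in preds), default=0.0) + t
    return max(finish.values())

samples = sorted(sample_makespan(dag) for _ in range(10000))
p95 = samples[int(0.95 * len(samples))]       # statistical guarantee on latency
print(f"95th-percentile makespan: {p95:.1f}")
```

    A scheduler can repeat this analysis for each candidate schedule and keep the one whose chosen percentile is smallest, rather than optimizing a worst-case or average-case estimate.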

    GenBase: A Complex Analytics Genomics Benchmark

    This paper introduces a new benchmark, designed to test database management system (DBMS) performance on a mix of data management tasks (joins, filters, etc.) and complex analytics (regression, singular value decomposition, etc.). Such mixed workloads are prevalent in a number of application areas, including most science workloads and web analytics. As a specific use case, we have chosen genomics data for our benchmark, and have constructed a collection of typical tasks in this area. In addition to being representative of a mixed data management and analytics workload, this benchmark is also meant to scale to large dataset sizes and multiple nodes across a cluster. Besides presenting this benchmark, we have run it on a variety of storage systems including traditional row stores, newer column stores, Hadoop, and an array DBMS. We present performance numbers on all systems on single and multiple nodes, and show that performance differs by orders of magnitude between the various solutions. In addition, we demonstrate that most platforms have scalability issues. We also test offloading the analytics onto a coprocessor. The intent of this benchmark is to focus research interest in this area; to this end, all of our data, data generators, and scripts are available on our web site.
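    The flavour of such a mixed workload can be sketched in a few lines of Python: a relational filter and join followed by a regression. The tiny schema and data below are invented for illustration and are not the benchmark's actual genomics tables.

```python
# Toy version of a mixed data-management + analytics workload of the kind GenBase targets.
import numpy as np

# "Tables" as plain Python structures: gene expression values and patient metadata.
expression = [  # (patient_id, gene_id, value)
    (1, "g1", 0.9), (1, "g2", 1.7), (2, "g1", 1.1), (2, "g2", 2.2),
    (3, "g1", 0.5), (3, "g2", 1.0), (4, "g1", 1.4), (4, "g2", 2.9),
]
patients = {1: 34, 2: 51, 3: 29, 4: 60}  # patient_id -> age

# Data management step: filter to gene "g2" and join expression with patient age.
joined = [(patients[pid], val) for (pid, gid, val) in expression if gid == "g2"]

# Analytics step: regress expression value on age (ordinary least squares).
ages = np.array([a for a, _ in joined], dtype=float)
vals = np.array([v for _, v in joined], dtype=float)
X = np.column_stack([ages, np.ones_like(ages)])
coeffs, *_ = np.linalg.lstsq(X, vals, rcond=None)
print("slope, intercept:", coeffs)
```

    The benchmark's point is that some systems are fast at the first step and slow at the second (or vice versa), and that moving data between the two halves often dominates.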

    Galactos: Computing the Anisotropic 3-Point Correlation Function for 2 Billion Galaxies

    The nature of dark energy and the complete theory of gravity are two central questions currently facing cosmology. A vital tool for addressing them is the 3-point correlation function (3PCF), which probes deviations from a spatially random distribution of galaxies. However, the 3PCF's formidable computational expense has prevented its application to astronomical surveys comprising millions to billions of galaxies. We present Galactos, a high-performance implementation of a novel, O(N^2) algorithm that uses a load-balanced k-d tree and spherical harmonic expansions to compute the anisotropic 3PCF. Our implementation is optimized for the Intel Xeon Phi architecture, exploiting SIMD parallelism, instruction and thread concurrency, and significant L1 and L2 cache reuse, reaching 39% of peak performance on a single node. Galactos scales to the full Cori system, achieving 9.8PF (peak) and 5.06PF (sustained) across 9636 nodes, making the 3PCF easily computable for all galaxies in the observable universe.
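    A highly simplified sketch of the underlying idea is shown below: for one "primary" galaxy, gather its neighbours within a radial bin using a k-d tree and accumulate spherical-harmonic coefficients of their angular positions. The catalogue, bin edges and ell_max are toy assumptions, and the sketch omits the pair-count combinatorics and the optimizations that make Galactos fast.

```python
# Toy multipole-style accumulation for one primary galaxy: a sketch, not Galactos itself.
import numpy as np
from scipy.spatial import cKDTree
from scipy.special import sph_harm

rng = np.random.default_rng(0)
galaxies = rng.uniform(0.0, 100.0, size=(5000, 3))   # invented 3-D positions
tree = cKDTree(galaxies)

def alm_for_primary(primary, r_lo, r_hi, ell_max=3):
    """Accumulate a_{lm} over neighbours of `primary` with r_lo < |r| <= r_hi."""
    alm = {}
    for j in tree.query_ball_point(galaxies[primary], r_hi):
        d = galaxies[j] - galaxies[primary]
        r = np.linalg.norm(d)
        if r <= r_lo or r > r_hi:
            continue
        theta = np.arctan2(d[1], d[0]) % (2 * np.pi)   # azimuthal angle
        phi = np.arccos(d[2] / r)                      # polar angle
        for ell in range(ell_max + 1):
            for m in range(-ell, ell + 1):
                alm[(ell, m)] = alm.get((ell, m), 0.0) + sph_harm(m, ell, theta, phi)
    return alm

print(len(alm_for_primary(0, r_lo=2.0, r_hi=8.0)))    # (ell_max + 1)^2 coefficients
```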

    Exploiting NVM in Large-scale Graph Analytics

    Data center applications like graph analytics require servers with ever larger memory capacities. DRAM scaling, however, is not able to match the increasing demands for capacity. Emerging byte-addressable, non-volatile memory technologies (NVM) offer a more scalable alternative, with memory that is directly addressable to software, but at a higher latency and lower bandwidth. Using an NVM hardware emulator, we study the suitability of NVM in meeting the memory demands of four state-of-the-art graph analytics frameworks, namely Graphlab, Galois, X-Stream and Graphmat. We evaluate their performance with popular algorithms (Pagerank, BFS, Triangle Counting and Collaborative filtering) by allocating memory exclusively from DRAM (DRAM-only) or emulated NVM (NVM-only). While all of these applications are sensitive to the higher latency and lower bandwidth of NVM, resulting in performance degradation of up to 4X with NVM-only (compared to DRAM-only), we show that the performance impact is somewhat mitigated in the frameworks that exploit CPU memory-level parallelism and hardware prefetchers. Further, we demonstrate that, in a hybrid memory system with NVM and DRAM, intelligent placement of application data based on their relative importance may help offset the overheads of the NVM-only solution in a cost-effective manner (i.e., using only a small amount of DRAM). Specifically, we show that, depending on the algorithm, Graphmat can achieve close to DRAM-only performance (within 1.2X) by placing only 6.7% to 31.5% of its total memory footprint in DRAM.
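    A minimal sketch of the hybrid-memory placement idea follows, assuming a greedy policy that ranks data structures by access intensity. The structure names, sizes and access counts are invented and do not come from the paper's measurements.

```python
# Greedy DRAM/NVM placement sketch: give the scarce DRAM budget to the hottest data
# structures (most accesses per byte) and leave the rest in NVM.

def place_in_dram(structures, dram_budget_bytes):
    """structures: list of (name, size_bytes, accesses). Returns names placed in DRAM."""
    ranked = sorted(structures, key=lambda s: s[2] / s[1], reverse=True)
    placed, used = [], 0
    for name, size, _ in ranked:
        if used + size <= dram_budget_bytes:
            placed.append(name)
            used += size
    return placed

# Hypothetical footprint of a graph framework (names and numbers are illustrative).
footprint = [
    ("vertex_values",   2 * 2**30, 9e9),   # small, very hot
    ("edge_list",      24 * 2**30, 3e9),   # large, streamed
    ("message_buffer",  4 * 2**30, 6e9),
]
total = sum(size for _, size, _ in footprint)
print(place_in_dram(footprint, dram_budget_bytes=int(0.1 * total)))  # ~10% of footprint in DRAM
```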